42 research outputs found

    Salience-based selection: attentional capture by distractors less salient than the target

    Current accounts of attentional capture predict that the most salient stimulus is invariably selected first. However, existing salience and visual search models assume noise in the map computation or selection process. Consequently, they predict the first selection to be stochastically dependent on salience, implying that attention could even be captured first by the second most salient (rather than the most salient) stimulus in the field. Yet capture by less salient distractors has not been reported, and salience-based selection accounts claim that a distractor has to be more salient than the target in order to capture attention. We tested this prediction using both an empirical and a modeling approach to the visual search distractor paradigm. For the empirical part, we manipulated the salience of target and distractor parametrically and measured reaction time interference when a distractor was present compared to when it was absent. Reaction time interference was strongly correlated with distractor salience relative to the target. Moreover, even distractors less salient than the target captured attention, as measured by reaction time interference and oculomotor capture. In the modeling part, we simulated the first selection in the distractor paradigm using behavioral measures of salience and taking into account the time course of selection, including noise. We were able to replicate the result pattern obtained in the empirical part. We conclude that each salience value follows a specific selection time distribution and that attentional capture occurs when the selection time distributions of target and distractor overlap. Hence, selection is stochastic in nature, and attentional capture occurs with a certain probability that depends on relative salience.
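    The stochastic selection account described above can be sketched as a small Monte Carlo simulation. All specifics here (the 100/salience mapping from salience to mean selection time, the noise level, and the salience values) are illustrative assumptions, not parameters from the study:

```python
import random

def selection_times(salience, n=100_000, noise_sd=20.0, seed=0):
    """Draw noisy selection times: higher salience -> earlier mean selection.
    The 100/salience mapping and the noise level are illustrative choices."""
    rng = random.Random(seed)
    mean = 100.0 / salience
    return [max(0.0, rng.gauss(mean, noise_sd)) for _ in range(n)]

def capture_probability(target_salience, distractor_salience, n=100_000):
    """Fraction of trials on which the distractor is selected first."""
    t = selection_times(target_salience, n, seed=1)
    d = selection_times(distractor_salience, n, seed=2)
    return sum(di < ti for ti, di in zip(t, d)) / n

# A distractor *less* salient than the target still captures attention on a
# substantial fraction of trials, because the two noisy selection-time
# distributions overlap -- capture probability tracks relative salience.
p = capture_probability(target_salience=5.0, distractor_salience=4.0)
```

    With equal salience the capture probability approaches one half, and it falls smoothly (rather than dropping to zero) as the distractor becomes less salient than the target.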

    Properties of V1 Neurons Tuned to Conjunctions of Visual Features: Application of the V1 Saliency Hypothesis to Visual Search behavior

    From a computational theory of primary visual cortex (V1), we formulate an optimization problem to infer neural properties in V1 from human reaction times (RTs) in visual search. The theory is the V1 saliency hypothesis: the bottom-up saliency of any visual location is represented by the highest V1 response to it relative to the background responses. The neural properties probed are those of the less well-known V1 neurons tuned simultaneously, or conjunctively, to two feature dimensions. The visual search task is to find a target bar unique in color (C), orientation (O), motion direction (M), or redundantly in combinations of these features (e.g., CO, MO, or CM) among uniform background bars. A feature-singleton target is salient because its evoked V1 response largely escapes the iso-feature suppression exerted on responses to the background bars. The responses of the conjunctively tuned cells are manifested in the shortening of the RT for a redundant-feature target (e.g., a CO target) relative to that predicted by a race between the RTs for the two corresponding single-feature targets (e.g., C and O targets). Our investigation enables the following testable predictions. Contextual suppression on the response of a CO-tuned or MO-tuned conjunctive cell is weaker when the contextual inputs differ from the direct inputs in both feature dimensions rather than just one. Additionally, CO-tuned and MO-tuned cells are often more active than the single-feature-tuned cells in response to redundant-feature targets; this occurs more frequently for the MO-tuned cells, such that they are no less likely than either the M-tuned or O-tuned neurons to be the most responsive neuron dictating saliency for an MO target.
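    The race-model baseline mentioned above (the redundant-target RT predicted by whichever single-feature process finishes first on each trial) can be sketched as follows. The RT means, spreads, and the Gaussian form are invented for illustration only:

```python
import random

def simulate_rts(mean, sd, n, seed):
    """Illustrative per-trial RTs (ms), floored at a plausible minimum."""
    rng = random.Random(seed)
    return [max(150.0, rng.gauss(mean, sd)) for _ in range(n)]

# Assumed single-feature RT distributions for a color (C) and an
# orientation (O) singleton target; numbers are not from the paper.
rt_color  = simulate_rts(mean=550, sd=80, n=50_000, seed=1)
rt_orient = simulate_rts(mean=560, sd=80, n=50_000, seed=2)

# Race-model prediction for the redundant CO target: on each trial the
# faster of the two independent single-feature processes determines the RT.
rt_race = [min(c, o) for c, o in zip(rt_color, rt_orient)]
mean_race = sum(rt_race) / len(rt_race)
```

    The race prediction is already faster than either single-feature mean (statistical facilitation); an observed CO mean RT below `mean_race` is the signature the paper attributes to conjunctively tuned CO cells.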

    Cortical Surround Interactions and Perceptual Salience via Natural Scene Statistics

    Spatial context in images induces perceptual phenomena associated with salience and modulates the responses of neurons in primary visual cortex (V1). However, the computational and ecological principles underlying contextual effects are incompletely understood. We introduce a model of natural images that includes grouping and segmentation of neighboring features based on their joint statistics, and we interpret the firing rates of V1 neurons as performing optimal recognition in this model. We show that this leads to a substantial generalization of divisive normalization, a computation that is ubiquitous across neural areas and systems. A main novelty of our model is that the influence of the context on a target stimulus is determined by their degree of statistical dependence. We optimized the parameters of the model on natural image patches and then simulated neural and perceptual responses to stimuli used in classical experiments. The model reproduces rich and complex response patterns observed in V1, such as the contrast dependence, orientation tuning, and spatial asymmetry of surround suppression, while also allowing for surround facilitation under conditions of weak stimulation. It also mimics the perceptual salience produced by simple displays and leads to readily testable predictions. Our results provide a principled account of orientation-based contextual modulation in early vision and its sensitivity to the homogeneity and spatial arrangement of inputs, and lend statistical support to the theory that V1 computes visual salience.
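    A minimal sketch of dependence-weighted divisive normalization, the core computation described above. The functional form and all numbers are illustrative assumptions; in the actual model the dependence weights are inferred from natural image statistics rather than supplied by hand:

```python
def normalized_response(center_drive, surround_drives, dependence, sigma=1.0):
    """Divisive normalization in which each surround unit's contribution to
    the pool is scaled by its inferred statistical dependence with the
    center (0..1). Dependence near 0 (segmented context) leaves the center
    largely unsuppressed; near 1 (grouped, homogeneous context) gives full
    surround suppression. Illustrative form only."""
    pool = sigma + sum(w * s for w, s in zip(dependence, surround_drives))
    return center_drive / pool

# A homogeneous surround (high statistical dependence with the center)
# suppresses the center response far more than a dissimilar surround
# (low dependence) -- which is why an odd-one-out element pops out.
r_homog = normalized_response(10.0, [10.0, 10.0], dependence=[0.9, 0.9])
r_odd   = normalized_response(10.0, [10.0, 10.0], dependence=[0.1, 0.1])
```

    The same mechanism permits surround facilitation under weak stimulation if the pooled term is allowed to fall below `sigma`'s baseline contribution; the sketch above only covers the suppressive regime.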

    Mitochondrial ATP synthase: architecture, function and pathology

    Human mitochondrial (mt) ATP synthase, or complex V, consists of two functional domains: F1, situated in the mitochondrial matrix, and Fo, located in the inner mitochondrial membrane. Complex V uses the energy of the proton electrochemical gradient to phosphorylate ADP to ATP. This review covers the architecture, function, and assembly of complex V. The role of complex V di- and oligomerization and its relation to mitochondrial morphology is discussed. Finally, pathology related to complex V deficiency and current therapeutic strategies are highlighted. Despite the huge progress in this research field over the past decades, questions remain regarding the structure of the subunits, the function of the rotary nanomotor at the molecular level, and the human complex V assembly process. The elucidation of further nuclear genetic defects will guide physio(patho)logical studies, paving the way for future therapeutic interventions.
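    One quantitative consequence of the rotary coupling between Fo and F1 can be illustrated with the H+/ATP stoichiometry. The c-ring sizes used below (c8 for mammals, c10 for yeast) are taken from the structural literature, not from the abstract above, and serve only to show the arithmetic:

```python
def protons_per_atp(c_ring_subunits, catalytic_sites=3):
    """One full rotor turn translocates one proton per c subunit through Fo
    and drives the three catalytic F1 sites through one ATP-producing cycle
    each, so H+/ATP = c / 3. The c-ring sizes passed in below come from the
    structural literature (an assumption here, not from this review)."""
    return c_ring_subunits / catalytic_sites

ratio_mammal = protons_per_atp(8)    # c8 ring: ~2.67 H+ per ATP
ratio_yeast  = protons_per_atp(10)   # c10 ring: ~3.33 H+ per ATP
```

    The smaller mammalian c-ring thus makes ATP synthesis cheaper per proton, at the cost of requiring a larger protonmotive force per step.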

    Tuning Perceptual Competition


    Visual straight-ahead preference in saccadic eye movements

    Ocular saccades bringing the gaze toward the straight-ahead direction (centripetal) exhibit higher dynamics than those steering the gaze away from it (centrifugal). This is generally explained by oculomotor determinants: centripetal saccades are more efficient because they pull the eyes back toward their primary orbital position. However, visual determinants might also be invoked: elements located straight ahead trigger saccades more efficiently because they receive privileged visual processing. Here, we addressed this issue by using both pro- and anti-saccade tasks in order to dissociate the centripetal/centrifugal direction of the saccades from the straight-ahead/eccentric location of the visual elements triggering them. Twenty participants underwent alternating blocks of pro- and anti-saccades during which eye movements were recorded binocularly at 1 kHz. The results confirm that centripetal saccades are always executed faster than centrifugal ones, irrespective of whether the triggering elements are located straight ahead or eccentrically. By contrast, saccades triggered by elements located straight ahead are consistently initiated more rapidly than those evoked by eccentric elements, irrespective of their centripetal or centrifugal direction. This double dissociation reveals that the higher dynamics of centripetal pro-saccades stem from both oculomotor and visual determinants, which act respectively on the execution and initiation of ocular saccades.
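    The 2x2 logic of the double dissociation (initiation latency tracks the element's location, execution dynamics track the saccade's direction) can be sketched with invented numbers that reproduce only the qualitative pattern reported above, not the study's data:

```python
def mean(xs):
    return sum(xs) / len(xs)

# Invented illustration of the reported 2x2 pattern:
# (saccade direction, element location) -> (latencies ms, peak velocities deg/s)
data = {
    ("centripetal", "straight-ahead"): ([180, 185], [420, 430]),
    ("centripetal", "eccentric"):      ([205, 210], [425, 435]),
    ("centrifugal", "straight-ahead"): ([182, 188], [380, 390]),
    ("centrifugal", "eccentric"):      ([207, 212], [385, 395]),
}

def marginal(metric, factor, level):
    """Mean of one metric (0 = latency, 1 = peak velocity) over all
    conditions sharing one level of one factor (0 = direction, 1 = location)."""
    picked = [v[metric] for k, v in data.items() if k[factor] == level]
    return mean([mean(p) for p in picked])

# Initiation dissociates by location: straight-ahead elements are faster...
lat_ahead = marginal(0, 1, "straight-ahead")
lat_ecc   = marginal(0, 1, "eccentric")
# ...while execution dissociates by direction: centripetal saccades are faster.
vel_centripetal = marginal(1, 0, "centripetal")
vel_centrifugal = marginal(1, 0, "centrifugal")
```

    Collapsing each measure over the other factor isolates the two effects, which is what makes the pro-/anti-saccade design diagnostic: it decouples direction from location on anti-saccade trials.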